Sequential and Factorized NML models
Authors
Abstract
Bayesian networks are among the most popular model classes for discrete, vector-valued i.i.d. data. The model selection criterion most widely used for Bayesian networks follows the Bayesian paradigm; however, this approach has recently been reported to be highly sensitive to the choice of prior hyper-parameters [1]. The general-purpose criteria AIC [2] and BIC [3], on the other hand, are derived through asymptotic arguments, and their behavior is suboptimal for small sample sizes.
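For reference, the normalized maximum likelihood (NML) criterion named in the title is defined as follows (notation is ours, not taken from the abstract): for a model class with maximum-likelihood estimator \hat{\theta}(\cdot),

\[
P_{\mathrm{NML}}(x^n) = \frac{P(x^n \mid \hat{\theta}(x^n))}{\sum_{y^n} P(y^n \mid \hat{\theta}(y^n))},
\]

where the sum in the denominator, the parametric complexity, ranges over all data sets of length n. Unlike AIC and BIC, this criterion is not an asymptotic approximation, and unlike the Bayesian score it requires no prior hyper-parameters.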
Similar articles
Exchangeability Characterizes Optimality of Sequential Normalized Maximum Likelihood and Bayesian Prediction with Jeffreys Prior
We study online prediction of individual sequences under logarithmic loss with parametric constant experts. The optimal strategy, normalized maximum likelihood (NML), is computationally demanding and requires the length of the game to be known. We consider two simpler strategies: sequential normalized maximum likelihood (SNML), which computes the NML forecasts at each round as if it were the la...
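To make the distinction concrete (notation assumed, not taken from the snippet): at round n+1, SNML normalizes the maximized likelihood over the next outcome only,

\[
P_{\mathrm{SNML}}(x_{n+1} \mid x^n) = \frac{P(x^n, x_{n+1} \mid \hat{\theta}(x^n, x_{n+1}))}{\sum_{x'} P(x^n, x' \mid \hat{\theta}(x^n, x'))},
\]

so, unlike full NML, it needs neither the horizon of the game nor a sum over all length-n sequences.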
High Level Synthesis from Sim-nML Processor Models
The design of modern, complex embedded systems requires a high level of abstraction. Sim-nML [1] is a specification language for modeling processors in such designs, and several software generation tools have been developed that take ISA specifications in Sim-nML as input. In this paper we present Sim-HS, a tool that implements high-level behavioral and structural synthesis of processors ...
NML, Bayes and True Distributions: A Comment on Karabatsos and Walker (2006)
We review the normalized maximum likelihood (NML) criterion for selecting among competing models. NML is generally justified on information-theoretic grounds, via the principle of minimum description length (MDL), in a derivation that “does not assume the existence of a true, data-generating distribution.” Since this “agnostic” claim has been a source of some recent confusion in the psychologic...
Unsupervised Learning of Disentangled and Interpretable Representations from Sequential Data
We present a factorized hierarchical variational autoencoder, which learns disentangled and interpretable representations from sequential data without supervision. Specifically, we exploit the multi-scale nature of information in sequential data by formulating it explicitly within a factorized hierarchical graphical model that imposes sequence-dependent priors and sequence-independent priors to...
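Read generically, the factorization described could take a form like the following (a sketch with assumed notation, since the snippet does not name the variables: \mu is a sequence-level latent, z_{1,n} and z_{2,n} are per-segment latents, and x_n are the observed segments):

\[
p(X, Z_1, Z_2, \mu) = p(\mu) \prod_{n} p(z_{1,n})\, p(z_{2,n} \mid \mu)\, p(x_n \mid z_{1,n}, z_{2,n}),
\]

so that the prior on z_{2,n} depends on the sequence through \mu (sequence-dependent), while the prior on z_{1,n} does not (sequence-independent).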
Efficient Computation of NML for Bayesian Networks
Bayesian networks are parametric models for multidimensional domains exhibiting complex dependencies between the dimensions (domain variables). A central problem in learning such models is how to regularize the number of parameters; in other words, how to determine which dependencies are significant and which are not. The normalized maximum likelihood (NML) distribution or code offers an inform...
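As a small illustration of what computing NML involves (a generic brute-force sketch for a single multinomial variable, not the efficient algorithm the cited paper develops), the normalizing constant, or parametric complexity, C(K, n) of a K-valued multinomial with n observations can be enumerated directly for small n:

from math import factorial
from itertools import product

def multinomial_coefficient(counts):
    """Number of length-n sequences having the given symbol counts."""
    n = sum(counts)
    result = factorial(n)
    for c in counts:
        result //= factorial(c)
    return result

def multinomial_nml_normalizer(K, n):
    """Brute-force parametric complexity:
    C(K, n) = sum over count vectors (h_1, ..., h_K) with sum n of
              multinom(n; h) * prod_k (h_k / n)^{h_k}."""
    total = 0.0
    # Enumerate all count vectors; feasible only for small K and n.
    for counts in product(range(n + 1), repeat=K):
        if sum(counts) != n:
            continue
        max_lik = 1.0
        for h in counts:
            if h > 0:
                max_lik *= (h / n) ** h
        total += multinomial_coefficient(counts) * max_lik
    return total

if __name__ == "__main__":
    # Example: complexity of a binary (K = 2) variable with n = 10 observations.
    print(multinomial_nml_normalizer(2, 10))

As the snippet indicates, the cited paper is concerned with carrying out this kind of computation efficiently in the Bayesian-network setting, where such direct enumeration is infeasible.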